Creators/Authors contains: "Kogler, Kevin"

  1. Autoencoders are a popular model in many branches of machine learning and lossy data compression. However, their fundamental limits, the performance of gradient methods, and the features learnt during optimization remain poorly understood, even in the two-layer setting. In fact, earlier work has considered either linear autoencoders or specific training regimes (leading to vanishing or diverging compression rates). Our paper addresses this gap by focusing on non-linear two-layer autoencoders trained in the challenging proportional regime, in which the input dimension scales linearly with the size of the representation. Our results characterize the minimizers of the population risk and show that such minimizers are achieved by gradient methods; their structure is also unveiled, thus leading to a concise description of the features obtained via training. For the special case of a sign activation function, our analysis establishes the fundamental limits for the lossy compression of Gaussian sources via (shallow) autoencoders. Finally, while the results are proved for Gaussian data, numerical simulations on standard datasets display the universality of the theoretical predictions.
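
     The setup described in the abstract can be illustrated with a minimal sketch: a non-linear two-layer autoencoder trained by gradient descent on Gaussian inputs in the proportional regime, where the input dimension d scales linearly with the latent dimension k. This is not the paper's exact construction; the sign activation it analyzes is replaced here by tanh so that plain backpropagation applies, and the dimensions, learning rate, and step counts are illustrative assumptions.

     ```python
     # Hedged sketch of the abstract's setting (assumptions: d = 2k, tanh in
     # place of the sign activation, plain SGD, illustrative hyperparameters).
     import torch
     import torch.nn as nn

     d, k = 128, 64                     # proportional regime: d/k fixed (rate 1/2)
     n_train, n_steps = 4096, 2000

     class TwoLayerAE(nn.Module):
         def __init__(self, d, k):
             super().__init__()
             self.encoder = nn.Linear(d, k, bias=False)
             self.decoder = nn.Linear(k, d, bias=False)

         def forward(self, x):
             z = torch.tanh(self.encoder(x))   # non-linear k-dimensional representation
             return self.decoder(z)            # linear reconstruction back to d dimensions

     # Gaussian source: i.i.d. standard normal inputs
     x = torch.randn(n_train, d)

     model = TwoLayerAE(d, k)
     opt = torch.optim.SGD(model.parameters(), lr=0.05)

     for step in range(n_steps):
         opt.zero_grad()
         loss = ((model(x) - x) ** 2).mean()   # empirical reconstruction risk
         loss.backward()
         opt.step()

     print(f"final reconstruction MSE per coordinate: {loss.item():.4f}")
     ```

     The reconstruction error reached by such a gradient-trained network is the quantity whose population-level limit the paper characterizes; for the sign activation, that limit gives the fundamental lossy-compression performance of shallow autoencoders on Gaussian sources.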